
Round Overview
SRM 502 was an interesting problem set that led to an exciting match. In Division 1, many coders submitted solutions to at least two of the three problems, and there were 5 submissions for the hard problem. The 500-point problem turned out to trick plenty of coders into thinking the solution was easier than it actually was. The challenge phase changed the setting entirely: only a third of all the submissions for the 500 were correct, and only Petr's submission for the 1000 survived the challenge phase. It was also a good night for those with the skills to find and predict mistakes in other coders' solutions: 11600 challenge points were distributed among them. Notable examples include atol with 10 successful challenges, and also neal_wu, lqp18_31 and jialin, who combined their challenging skills with solving skills to get very close second, third, and fourth places, respectively. But first place still had to go to Petr for being the only one to solve the 1000-point problem correctly.
Division 2 was challenging: a quarter of the coders were not able to get a positive score. Still, 11 coders were able to solve all three problems. This time, a significant speed difference on the 1000-point problem was what gave newcomer Goldiceage an advantage of almost 100 points over the second place, xptree, and the third place, littlesheep2010.
TheProgrammingContestDivTwo
Problem Details
Used as: Division Two - Level One:
| Value | 250 |
| Submission Rate | 576 / 698 (82.52%) |
| Success Rate | 536 / 576 (93.06%) |
| High Score | Poor_Man for 249.98 points (0 mins 16 secs) |
| Average Score | 188.88 (for 536 correct submissions) |
This was an unusual optimization problem for the easiest problem slot. In optimization problems, we first need to make sure we understand what we want to optimize. In this case, we need to maximize the score first, and then minimize the time penalty. It can also be useful to consider which variables are involved in making the decision. This time we need to consider two factors:
Which problems to solve: It may not be possible to solve all of the problems in the input. The sum of the required times of the chosen problems must be smaller than or equal to T, so solving a problem may make us unable to solve another. The problem asks us to solve as many problems as possible.
The order in which the problems are solved: Example 2) shows that the order is important when minimizing the penalty.
When there are two things to do, it is usually easier to consider them separately at first. Assume that we have already decided which problems to solve, that their sum of required times is <= T, and that the number of problems solved is the maximum possible. We need to find an order for which the penalty is as small as possible. The penalty is the sum of the times at which each problem is solved. The time at which a problem is solved is equal to the sum of the required times of all the problems solved before it, plus the time required for the problem itself. This is a well-known scheduling problem and the solution is to solve the problems in ascending order of required time. If you already know this problem and/or a proof for this greedy solution, you may skip the following paragraphs.
Consider two problems A and B, which require rA and rB minutes respectively to be solved. There are two options. We can solve A before B, which makes the total penalty:
rA + (rA + rB)
Or, if we instead solve B before A, the penalty is:
rB + (rA + rB)
If solving A before B gives the smallest penalty possible, then the penalty of solving A before B must be smaller than or equal to the penalty of solving B before A:
rA + (rA + rB) <= rB + (rA + rB)
2*rA + rB <= 2*rB + rA
rA <= rB
In order for solving A before B to result in the lowest penalty possible, we need the required time rA to be smaller than or equal to rB. It is also true for any value of n that it is always best to solve the problem with the lowest required time first. We have already proven that it is true for n=2; the rest can be proven by induction and is left as an exercise. A different style of proof for it can be found in the editorial for the problem BatchSystem from SRM 481.
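If you would rather verify this claim empirically than by induction, the following small C++ sketch (not part of the actual solution; the names in it are purely illustrative) compares the ascending-required-time order against every permutation of a few randomly generated required times:

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>
using namespace std;

// Penalty of solving the problems in the given order:
// the sum of the times at which each problem is finished.
long long penaltyOf(const vector<int>& times) {
    long long t = 0, penalty = 0;
    for (int r : times) {
        t += r;       // time at which this problem is finished
        penalty += t; // add it to the penalty
    }
    return penalty;
}

int main() {
    srand(502);
    for (int test = 0; test < 1000; test++) {
        int n = 1 + rand() % 6;
        vector<int> r(n);
        for (int i = 0; i < n; i++) r[i] = 1 + rand() % 20;

        // Greedy claim: ascending required time minimizes the penalty.
        vector<int> greedy = r;
        sort(greedy.begin(), greedy.end());
        long long best = penaltyOf(greedy);

        // Compare against every possible order.
        vector<int> perm = greedy;
        do {
            if (penaltyOf(perm) < best) {
                cout << "Counterexample found!" << endl;
                return 1;
            }
        } while (next_permutation(perm.begin(), perm.end()));
    }
    cout << "The greedy order matched the optimum in every test." << endl;
    return 0;
}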
Which problems to solve
We know the order. What about which problems to pick? We need to pick problems such that their sum of required times is <= T and we also need the amount of problems to be as large as possible. But there may be many different ways to pick X problems such that their sum is <= T, in some of them the penalty may be smaller than in the others.
This is another situation in which picking the smallest requiredTime first solves the problem. In order to prove it, imagine that the maximum number of problems we can solve is k. Then there are k problems whose required times added together give a sum that is <= T. This can be rephrased as "the minimum sum of the required times of k problems is <= T". We need to prove that if we always pick the minimum requiredTime available, we will reach the minimum sum of k elements regardless of the value of k. This is true for k=1. For k=2, we can prove that picking the two smallest required times yields the smallest possible sum of 2 elements. Eventually, we can conclude that to get the minimum sum of k elements, we need the k smallest elements. Because of this, if we always pick the smallest elements of requiredTime available, then the number of problems solved will be as large as possible. There may be other ways to pick the same number of problems in a way that the sum is <= T, but the penalty rule requires that we pick the smallest ones.
The solution is finally to always solve the problem with the smallest requiredTime that has not been solved yet. Repeat until there is no time left.
public int[] find(int T, int[] requiredTime) {
int n = requiredTime.length;
//Sort the problems by ascending requiredTime:
Arrays.sort(requiredTime, 0, n);
// Current time.
int t = 0;
// Current score and penalty.
int score = 0, penalty = 0;
int i = 0;
while ((i < n) && (t + requiredTime[i] <= T)) {
// if t+requiredTime[i] > T, then there are no
// problems left that can be solved before the end
// of the contest.
score++; //solve the problem
t += requiredTime[i]; //increase the current time
penalty += t; //increase the penalty
i++;
}
return new int[] {
score,
penalty
};
}
TheLotteryBothDivs
Problem Details
Used as: Division Two - Level Two:
| Value | 500 |
| Submission Rate | 398 / 698 (57.02%) |
| Success Rate | 232 / 398 (58.29%) |
| High Score | naranbayar_mon for 498.04 points (1 mins 47 secs) |
| Average Score | 305.05 (for 232 correct submissions) |
Used as: Division One - Level One:
| Value | 250 |
| Submission Rate | 532 / 541 (98.34%) |
| Success Rate | 428 / 532 (80.45%) |
| High Score | lindamon for 249.05 points (1 mins 45 secs) |
| Average Score | 194.31 (for 428 correct submissions) |
In this problem there is a total number of tickets. Of all the tickets, some win a prize. The probability to win a prize can be read as (number of winning tickets) / (total number of tickets). The total number of tickets is always 10^9. The reason it is always 10^9 is that there are 9 digits and each digit can be one of 10 numbers: 0,1,2,3,4,5,6,7,8,9. We can consider each digit as an event, and the events are independent, which means that the value picked for each digit does not depend on the other values. 10 options for digit 0, 10 options for digit 1, etc. When multiplying all the options together, we have: 10 x 10 x ... x 10 (9 times) = 10^9.
Since we know the total number of tickets, all we need in order to get the probability is to count the total number of winning tickets. We need to count the number of tickets that match at least one of the suffixes provided in goodSuffixes. Let us begin with a single suffix: if this suffix was “123”, how many tickets have “123” as a suffix? Of the 9 digits available for the ticket, the last 3 must have fixed digit values. The remaining 9-3 = 6 digits may still have any of the 10 available digit values. The result is, as in the previous paragraph, 10^6. There are 10^6 tickets that finish in “123”. If the suffix was “766”, the result would not change: there are still 6 digits that may have any of the 10 values. The result actually depends only on the size of the suffix. If the size of the suffix is x, then there are (9-x) digits that may have any value from 0 to 9. The number of tickets that have a given suffix of size x is therefore 10^(9-x).
There may be more than one suffix in the input. Suffixes are usually mutually exclusive, which means that a ticket cannot have both suffixes at the same time. An example of mutually exclusive suffixes would be “123” and “425”: it is impossible for a ticket to simultaneously finish with 123 and 425. If all the suffixes in the input were mutually exclusive, we could simply add together the number of tickets each of the suffixes represents to get the total number of winning tickets. However, the suffixes are not always mutually exclusive. An easy example is the third one from the problem statement, where the suffixes are “47” and “47”. In that case, it is easy to see that a ticket may have both suffixes at the same time. There is a simple workaround: ignore repeated suffixes. Example 3) contains a different case: “4747” and “47”. Every ticket that ends with 4747 also ends with 47. It is enough to consider only 47: when counting the tickets that end with 47 we implicitly include the tickets that end with 4747, because 47 is a suffix of 4747. We can generalize this by saying that if there are two suffixes i and j in the input and goodSuffix[i] is a suffix of goodSuffix[j], then we need to ignore goodSuffix[j] when counting. We can claim that suffixes are always mutually exclusive unless one is a suffix of the other. As long as we handle the case in which a good suffix is a suffix of another good suffix, and also the case with repeated suffixes (by removing the unnecessary ones), the total number of good tickets can be calculated by adding together the number of tickets for each remaining suffix. Note that although two equal strings are suffixes of each other, the case in which a good suffix has duplicates is handled differently from the case in which a good suffix is a suffix of another. Therefore, we need to consider only proper suffixes for the latter. (A is a proper suffix of B if A is a suffix of B and A is not equal to B.)
We can sum up the solution in four steps:
Remove duplicated suffixes.
Remove unnecessary suffixes (for every suffix i, if there exists a suffix j such that goodSuffix[j] is a proper suffix of goodSuffix[i], remove suffix i).
For each remaining suffix i: add 10^(9 - length(goodSuffix[i])) to the count of good tickets.
The probability to win a prize is: (amount of good tickets)/(total amount of tickets).
//Returns true if a is a proper suffix of b
bool isProperSuffix(string a, string b) {
if (b.length() <= a.length()) {
return false;
}
return (b.substr(b.length() - a.length()) == a);
}
double find(vector < string > goodSuffixes) {
int n = goodSuffixes.size();
//For the implementation, we will mark suffixes as
//removed instead of actually removing them.
vector<bool> removed(n, false);
for (int i = 0; i < n; i++) {
//Remove duplicate suffixes:
for (int j = 0; j < i; j++) {
if (goodSuffixes[i] == goodSuffixes[j]) {
removed[i] = true;
}
}
//Remove unnecessary suffixes:
for (int j = 0; j < n; j++) {
if (isProperSuffix(goodSuffixes[j], goodSuffixes[i])) {
removed[i] = true;
}
}
}
double good = 0.0;
//Count good tickets
for (int i = 0; i < n; i++) {
if (!removed[i]) {
good += pow(10.0, 9.0 - goodSuffixes[i].length());
}
}
double total = 1e9;
//The probability:
return good / total;
}
Used as: Division Two - Level Three:
| Value | 1000 |
| Submission Rate | 40 / 698 (5.73%) |
| Success Rate | 24 / 40 (60.00%) |
| High Score | Goldiceage for 921.43 points (8 mins 26 secs) |
| Average Score | 624.74 (for 24 correct submissions) |
The problem asks us to count all possible sets of K numbers from 0 to N-1 that have a total sum that is a multiple of N. In principle we could enumerate all the possibilities and count them, but we will head towards a dynamic programming solution, because this is a counting problem with large constraints, and in such problems it often pays to look for a dynamic programming solution. If we want to try dynamic programming, we first need a recurrence. We will begin by considering the overall case: we have a list of numbers from 0 to N-1, we need to pick K of them, and the sum of the picked elements must be a multiple of N. For now, our function takes a list of numbers List and the number of elements to pick, K.
When we begin the recurrence, we have some numbers in the List and we need to decide which of them go into the set of K elements. We will begin with the first number in the List: List[0]. If we do not add List[0] to the set, then K stays the same but we should remove List[0] from the list; calling the recurrence again with the new list and the same value of K gives us the number of sets of K elements that do not include List[0] and have a sum that is a multiple of N. If instead we want List[0] to go into the set, then we should decrease K by 1, because one of the K numbers has been decided. We also need to remove List[0] from the list so that we do not consider it anymore. However, we are no longer interested in the number of sets of K-1 elements of the new list that have a sum that is a multiple of N, because List[0] will be added to the sum of the numbers. So we want the number of sets of K-1 elements from the updated List that have a sum S such that (S + List[0]) is a multiple of N.
Whenever we read “A is a multiple of B” we can read it as “when we divide A by B, the remainder is 0” or “A = 0 modulo B”. We will use the latter, because modular arithmetic is useful here: (S + List[0] = 0 mod N) is actually an equation. Solving the equation for S gives:
S + List[0] = 0 mod N
S = -List[0] mod N
S = (N-List[0]) mod N
In effect, we need to count the number of sets of K-1 elements that have a sum that is equal to (N-List[0]) modulo N. The first step of our recurrence needed sums that are multiples of N, which is the same as saying sums that are equal to 0 modulo N. The required value of the sum modulo N can therefore become an argument to the recurrence. Then, when we decide to use List[0], the required modulo S becomes:
S' = (S - List[0]) mod N
Our recurrence is then: countSets(List, K, S) - The number of sets of K elements from List that yield a total sum that is equal to S modulo N. The solution to the main problem is: countSets([0, 1, …, N-1], K, 0), because we want the number of sets of K unique numbers from 0 to N-1 that have a sum equal to 0 modulo N (multiple of N).
function countSets(List, K, S) {
List' = List with List[0] removed
return countSets(List', K-1, (S-List[0]) mod N) + countSets(List', K, S)
}
What the recurrence does is count the sets that include List[0] and follow the rule, and then add the number of sets that do not include List[0]. A recurrence needs a base case. When K=0, the only possible result is the empty set {}; the sum of the elements of the empty set is 0, which is always a multiple of N (0 mod N). Therefore, if K=0 and S=0 there is exactly one such set, and if K=0 and S!=0 there are none. The other base case is when the list is empty and K is not 0: there are no elements left that we could add to reach K elements, so the result is 0.
Removing elements from a list tends to be a very taxing operation and it is also complicated to implement. In order to simplify it, note that the original list is 0,1,…,N-1 and that in each step we remove the first element of the list. So the first list is [0,1,…,N-1], the list given to the sub-instances is [1,2,…,N-1], then [2,…,N-1], and so on. We can therefore describe a list by its starting number p: if p=0, the list is 0,1,…,N-1; if p=6, the list is 6,7,…,N-1. List[0] is simply equal to p, and removing the first element of the List is as simple as increasing p by 1.
The current recurrence is:
function countSets(p, K, S) {
if (K == 0) {
if (S == 0) {
return 1;
} else {
return 0;
}
}
if (p >= N) { //no more elements in the list to use
return 0;
}
return countSets(p + 1, K - 1, (S - p) mod N) + countSets(p + 1, K, S)
}
The recurrence is acyclic because it always calls sub-instances of itself with greater values of p. p can be a number from 0 to N, so there are N+1 possible values of p. K starts equal to the original value of K in the statement and is then decreased until it gets to 0: K+1 values in total. S is always a value modulo N, and values modulo N can only be numbers from 0 to N-1, so there are N possible values of S. Then there are at most (N+1) * (K+1) * N possible argument combinations we can give to that function. Each instance of the function individually does O(1) operations (calling sub-instances and a couple of condition checks). If we use memoization to make sure the function is evaluated at most once for each argument combination, then the running time complexity is O(N*K*N) = O(N^2 * K); remember that N<=1000 and K<=47. This time complexity is good enough for the 2 second time limit on TopCoder's computers. Unfortunately, since we used memoization, we also need O(N*K*N) memory - actually 4*(N+1)*(K+1)*N bytes = 4*1001*48*1000 bytes ~= 183 megabytes. The memory limit in TopCoder is 64 MB.
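For illustration only, a direct memoization of the recurrence could be written as the C++ sketch below (the global variables and helper names here are our own choices, not part of any required interface). As just calculated, its table is too large for the largest values of N, which is exactly what the optimization described next fixes:

#include <vector>
using namespace std;

const int MOD = 1000000007;

int N, K;
// memo[p][k][s] caches countSets(p, k, s); -1 means "not computed yet".
// For N close to 1000 this table alone needs roughly 183 MB.
vector<vector<vector<int>>> memo;

int countSets(int p, int k, int s) {
    if (k == 0) {
        return (s == 0) ? 1 : 0;   // only the empty set remains
    }
    if (p >= N) {
        return 0;                  // no more elements in the list to use
    }
    int& res = memo[p][k][s];
    if (res != -1) {
        return res;                // already computed
    }
    // Either use p (the required sum modulo becomes (s - p) mod N) or skip it.
    long long useP = countSets(p + 1, k - 1, ((s - p) % N + N) % N);
    long long skipP = countSets(p + 1, k, s);
    return res = (int)((useP + skipP) % MOD);
}

int find(int n, int k) {
    N = n;
    K = k;
    memo.assign(N + 1, vector<vector<int>>(K + 1, vector<int>(N, -1)));
    return countSets(0, K, 0);
}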
The last trick to get this solution working is to notice that the recurrence only calls sub-instances with the next value of p (p+1). If we calculate the results of the recurrence in descending order of p, then while processing p we only need to remember the results for p+1. By doing this, we will only need O(K * N) memory. This can be understood better if we turn the recurrence into an iterative dynamic programming solution:
table[N + 1][K + 1][N] // Arguments are: [p][k][S]
for p = N to 0 {
for k = 0 to K {
for S = 0 to N - 1 {
if (k == 0) {
if (S == 0) {
table[p][k][S] = 1
} else {
table[p][k][S] = 0
}
} else if (p == N) {
table[p][k][S] = 0
} else {
table[p][k][S] = table[p + 1][k - 1][(S - p) mod N] + table[p + 1][k][S]
}
}
}
}
That is the iterative way to calculate the complete recurrence. This is where we note that instead of needing [N+1] entries in the first dimension of the table array, we only need two, because we always only need the results for p+1. An easy way to do this is to use [p%2] instead of [p] and [(p+1)%2] instead of [p+1] when accessing the table array. This way all even values of p share the same space and all odd values of p share the other. Since p%2 and (p+1)%2 may only be 0 or 1, we can change the first dimension of the table array to [2]. The following is the final C++ implementation of the optimized recurrence. The result of the recurrence can be very large, but since the problem asks for it modulo 1000000007, we can apply that modulo after every addition operation, because (a+b)%M is equal to ((a%M) + (b%M))%M. Also note that in order to calculate (s-p) mod N, (s-p)%N is not enough in C++, because the % operator behaves differently from modular arithmetic for negative numbers; the fix is to add N to the value before applying the % operator.
int find(int N, int K) {
//Note that memory is now only O(K*N).
int table[2][K + 1][N];
for (int p = N; p >= 0; p--) {
for (int k = 0; k <= K; k++) {
for (int s = 0; s < N; s++) {
if (k == 0) {
if (s == 0) {
table[p % 2][k][s] = 1;
} else {
table[p % 2][k][s] = 0;
}
} else if (p == N) {
table[p % 2][k][s] = 0;
} else {
table[p % 2][k][s] = table[(p + 1) % 2][k - 1][(s - p + N) % N] //Use p
+
table[(p + 1) % 2][k][s]; //Don't use p
//We only need the result Modulo 1000000007
table[p % 2][k][s] %= 1000000007;
}
}
}
}
// The result is to call the recurrence when
// p = 0, k = K, and S=0
return table[0 % 2][K][0];
}
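Regarding the remark about (s-p) mod N: here is a tiny standalone demonstration (with made-up values) of how the % operator behaves for negative numbers in C++, and why adding N first fixes it:

#include <iostream>
using namespace std;

int main() {
    int N = 7, s = 2, p = 5;
    // s - p = -3, and in C++ the % operator keeps the sign of the dividend,
    // so -3 % 7 evaluates to -3, which is not a valid table index.
    cout << (s - p) % N << endl;             // prints -3
    // Adding N before taking % moves the value into [0, N-1].
    // In the solution above s and p are both in [0, N-1],
    // so s - p > -N and adding N once is always enough.
    cout << (s - p + N) % N << endl;         // prints 4
    // A version that works no matter how negative the value is:
    cout << (((s - p) % N) + N) % N << endl; // also prints 4
    return 0;
}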
TheProgrammingContestDivOne
Problem Details
Used as: Division One - Level Two:
| Value | 500 |
| Submission Rate | 355 / 541 (65.62%) |
| Success Rate | 106 / 355 (29.86%) |
| High Score | zhouerjin for 480.92 points (5 mins 42 secs) |
| Average Score | 295.75 (for 106 correct submissions) |
As with the TheProgrammingContestDivTwo problem, there are two decisions to consider: which problems to solve and the order in which we solve them. Weights and initial scores make the problem more interesting, but it is still valid to separate the question into two parts. Note that in order to avoid very long words in the explanation, we will abbreviate pointsPerMinute as P, requiredTime as R and maxPoints as M.
What is the optimal order?
Assume that we have already decided which problems to solve. We need to find the optimal order. Assume that the chosen tasks are ordered from 1 to K. The formula for the total score is:
M[1] - P[1]*T[1] + M[2] - P[2]*T[2] + ... + M[K] - P[K]*T[K]
Where T[i] is the time at which we solve task i. The first thing that we need to notice is that, since the chosen tasks are fixed, the maximum points M[i] do not change regardless of the time at which each task is solved. We can group the M[i]s together:
M[1]+M[2]+...+M[K] - P[1]*T[1] - P[2]*T[2] - ... - P[K]*T[K]
Since the sum of M[i]s will always be the same regardless of the order, we only need to maximize the following expression:
- P[1]*T[1] - P[2]*T[2] - ... - P[K]*T[K]
They are all subtraction operations. Maximizing that expression is the same as minimizing:
P[1]*T[1] + P[2]*T[2] + ... + P[K]*T[K]
The solving times T[i] depend on the order in which we solve the tasks. Assume that we solve the tasks in the order 1, 2, ..., K; then the expression is equal to:
P[1]*(R[1]) + P[2]*(R[1]+R[2]) + ... + P[K]*(R[1]+R[2]+...+R[K])
In other words, if we solve task #2 after task #1, then we need to wait R[1] minutes to solve task 1, and (R[1]+R[2]) minutes to solve task #2. To solve the last task, we need the total amount of required minutes.
We need to find a rule that would help us find the optimal order for any amount of chosen tasks. Let us begin with simple cases. When two tasks are chosen, what is the optimal order? Consider that we have two tasks a and b. Then there are two orders to try:
P[a]*(R[a]) + P[b]*(R[a]+R[b]) (solve a before b)
P[b]*(R[b]) + P[a]*(R[a]+R[b]) (solve b before a)
If we assume that solving a before b yields the optimal penalty, then the first expression must be less than or equal to the second one:
P[a]*(R[a]) + P[b]*(R[a]+R[b]) <= P[b]*(R[b]) + P[a]*(R[a]+R[b])
P[a]*R[a] + P[b]*R[a]+P[b]*R[b]<= P[b]*R[b] + P[a]*R[a] + P[a]*R[b]
P[b]*R[a] <= P[a]*R[b]
R[a]/P[a] <= R[b]/P[b]
Therefore, if solving a before b is optimal then R[a]/P[a] must be <= R[b]/P[b]. Note that we can also begin with (R[a]/P[a] <= R[b]/P[b]) and reverse the steps to conclude that this condition is both necessary and sufficient for the order “a before b” to be optimal. We have found that for n=2 it is best to solve the task with the minimum R[i]/P[i] first. For larger values of n, we can look at consecutive positions i, i+1 and do a similar proof when deciding whether to put a in position i and b in position i+1 or vice versa:
P[1]*R[1] + P[2]*(R[1]+R[2]) + ... + P[i-1]*(R[1]+R[2]+...+R[i-1])
+ P[a]*(R[1]+R[2]+...+R[i-1]+R[a]) + P[b]*(R[1]+R[2]+...+R[i-1]+R[a]+R[b])
+ .... + P[K]*(R[1]+R[2]+...+R[i-1]+R[a]+R[b]+...+R[K])
<=
P[1]*R[1] + P[2]*(R[1]+R[2]) + ... + P[i-1]*(R[1]+R[2]+...+R[i-1])
+ P[b]*(R[1]+R[2]+...+R[i-1]+R[b]) + P[a]*(R[1]+R[2]+...+R[i-1]+R[b]+R[a])
+ .... + P[K]*(R[1]+R[2]+...+R[i-1]+R[b]+R[a]+...+R[K])
--------------------------------------------------------------------------------
P[a]*(R[1]+R[2]+...+R[i-1]+R[a]) + P[b]*(R[1]+R[2]+...+R[i-1]+R[a]+R[b])
<=
P[b]*(R[1]+R[2]+...+R[i-1]+R[b]) + P[a]*(R[1]+R[2]+...+R[i-1]+R[b]+R[a])
--------------------------------------------------------------------------------
P[a]*(R[1]+R[2]+...+R[i-1]) + P[a]*R[a] + P[b]*(R[1]+R[2]+...+R[i-1]) + P[b]*R[a] + P[b]*R[b]
<=
P[b]*(R[1]+R[2]+...+R[i-1]) + P[b]*R[b] + P[a]*(R[1]+R[2]+...+R[i-1]) + P[a]*R[b] + P[a]*R[a]
--------------------------------------------------------------------------------
P[b] * R[a] <= P[a] * R[b]
R[a] / P[a] <= R[b] / P[b]
It is the same result: for every two consecutive positions i, i+1, the task with the minimum R/P must be placed first. We can then conclude that in an optimal order R[i]/P[i] <= R[i+1]/P[i+1] for every i, which means that the optimal order is to sort the tasks in ascending order of R[i]/P[i].
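Comparing the ratios R[i]/P[i] with floating point numbers can run into precision issues; since the values are positive (as the derivation above already assumes when dividing by P), the condition R[a]/P[a] <= R[b]/P[b] is equivalent to the cross multiplication R[a]*P[b] <= R[b]*P[a]. A possible C++ sketch of this sorting step, using an assumed Task struct that is not part of the original interface, is:

#include <algorithm>
#include <vector>
using namespace std;

// Hypothetical container for one task's data.
struct Task {
    long long M, P, R;  // maxPoints, pointsPerMinute, requiredTime
};

// Sorts the tasks into the optimal solving order derived above:
// ascending R/P, compared via cross multiplication so that no
// floating point division is needed. Assumes positive P values
// and that R*P fits in 64 bits.
void sortByRatio(vector<Task>& tasks) {
    sort(tasks.begin(), tasks.end(), [](const Task& a, const Task& b) {
        // R[a]/P[a] < R[b]/P[b]  <=>  R[a]*P[b] < R[b]*P[a]
        return a.R * b.P < b.R * a.P;
    });
}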
Which problems to solve
It turns out that the rule for the order of the problems is very simple: sort by R[i]/P[i]. It is still necessary to select the subset of problems to solve. However, the new information about the order will come in handy. Since we are optimizing the score, we can directly assume that the problems are sorted by R[i]/P[i]. Then, when deciding which tasks to add, we can do it in that same order. The first task, task 0, will be the one with the minimum R[i]/P[i], and after deciding whether or not to solve this task, we will not have to worry about task 0 any more.
Imagine the current time is t and that for every task with an index lower than i we have already decided whether to solve it or not, increasing t accordingly whenever we decided to solve one. We now want to decide whether to solve task i or not. If we solve task i, the time will increase to (t + R[i]), the score will increase by (M[i] - P[i]*(t + R[i])), and then we will move on to task i+1 and continue making decisions. If we decide not to solve task i, nothing changes except that we skip to task i+1 and continue making decisions. That is the basis for a recurrence, but instead of keeping the current score as an argument, we will return the maximum score that can be made with the tasks with indexes greater than or equal to i. The recurrence is a function F(t, i) that returns the maximum score that can be made by solving tasks with indexes greater than or equal to i if the current time is t. The two decisions can be summed up as:
F(t, i) {
if (i == n) {
// no more tasks to solve.
return 0.
}
s = 0
if (t + R[i] <= T) {
s = M[i] - P[i] * (t + R[i]) // solving i yields this score
s = s + F(t + R[i], i + 1) //we can still attempt to solve
//more tasks after solving i
}
s = max(s, F(t, i + 1)) //Do not solve task i, move forward
return s
}
The solution to the problem is F(t = 0, i = 0): the time used so far is 0 and we have not yet decided about the first task. This recurrence is acyclic, which makes it suitable for memoization or dynamic programming. In this recurrence, t can be as large as T, which is at most 100000, and i can be as large as n, which is at most 50. There are at most 50*100000 = 5000000 argument combinations that can be given to this recurrence. The recurrence does O(1) operations besides calling itself. So, if we use memoization or dynamic programming to make sure each state of the function is evaluated at most once, the total time complexity is O(T * n), which is fast enough for the 2 second time limit. The memory it will take is 5000000 * 4 bytes, or 5000000 * 8 bytes if 64-bit integers are in use, that is 19 or 38 MB, which is below the memory limit. Note, however, that the stack in C++ is not as large as the memory limit, so you may need to create the memoization/dynamic programming array in the heap when using C++. Otherwise, the program should run within the time and space limits. A Java implementation which uses iterative dynamic programming to implement the recurrence follows:
public int find(int T, int[] M, int[] P, int[] R) {
//We use 64 bit integers because high values of
// P and R may overflow when multiplied.
int n = M.length;
//sort by R[i]/P[i] (using a simple O(n^2) exchange sort for the sake of simplicity)
for (int i = 0; i < n; i++) {
for (int j = i + 1; j < n; j++) {
// We want to sort by R/P which translates to:
// R[i]/P[i] < R[j]/P[j]
// doubles can be problematic to use in comparison,
// we can get rid of the requirement for doubles if
// we use:
if (!((long) R[i] * P[j] < (long) R[j] * P[i])) {
// R[i]/P[i] is not < R[j]/P[j] swap them:
int tm;
tm = R[i];
R[i] = R[j];
R[j] = tm;
tm = M[i];
M[i] = M[j];
M[j] = tm;
tm = P[i];
P[i] = P[j];
P[j] = tm;
}
}
}
long[][] dp = new long[n + 1][T + 1];
// the i==n case will be set to 0 by default.
for (int i = n - 1; i >= 0; i--) {
for (int t = 0; t <= T; t++) {
long res = 0;
if (t + R[i] <= T) {
// use it:
res = M[i] - (long) P[i] * (t + R[i]);
res += dp[i + 1][t + R[i]];
}
// do not use it:
res = Math.max(res, dp[i + 1][t]);
dp[i][t] = res;
}
}
// The result is when the recurrence starts with t=0, and i=0
return (int) dp[0][0];
}
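For C++ users, who (as noted above) may want to keep a table of this size off the stack, a rough equivalent of the Java solution using heap-allocated vectors could look like the following sketch; the parameter names and signature are an approximation, not necessarily the exact method signature:

#include <algorithm>
#include <vector>
using namespace std;

int find(int T, vector<int> M, vector<int> P, vector<int> R) {
    int n = M.size();

    // Sort task indices ascending by R[i]/P[i], comparing with cross
    // multiplication to avoid floating point issues.
    vector<int> order(n);
    for (int i = 0; i < n; i++) order[i] = i;
    sort(order.begin(), order.end(), [&](int a, int b) {
        return (long long)R[a] * P[b] < (long long)R[b] * P[a];
    });

    // dp[i][t]: best score obtainable from tasks order[i..n-1] when the
    // current time is t. Vectors keep the ~40 MB table on the heap.
    vector<vector<long long>> dp(n + 1, vector<long long>(T + 1, 0));
    for (int i = n - 1; i >= 0; i--) {
        int id = order[i];
        for (int t = 0; t <= T; t++) {
            long long res = dp[i + 1][t];                     // skip this task
            if (t + R[id] <= T) {                              // or solve it now
                long long gained = M[id] - (long long)P[id] * (t + R[id]);
                res = max(res, gained + dp[i + 1][t + R[id]]);
            }
            dp[i][t] = res;
        }
    }
    return (int)dp[0][0];
}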